Partial differential equations (PDEs) are important tools for modelling physical systems, and incorporating them into machine learning models is an important way of embedding physical knowledge. Given any system of linear PDEs with constant coefficients, we propose a family of Gaussian process (GP) priors, which we call EPGP, such that all realizations are exact solutions of this system. We apply the Ehrenpreis-Palamodov fundamental principle, which works like a non-linear Fourier transform, to construct GP kernels mirroring standard spectral methods for GPs. Our approach can infer probable solutions of linear PDE systems from any data, such as noisy measurements or initial and boundary conditions. Constructing EPGP priors is algorithmic, generally applicable, and comes with a sparse version (S-EPGP) that learns the relevant spectral frequencies and works better for big data sets. We demonstrate our approach on three families of PDE systems, the heat equation, wave equation, and Maxwell's equations, where we improve upon the state of the art in computation time and precision, in some experiments by several orders of magnitude.
translated by Google Translate
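As a rough illustration of the kind of prior the abstract describes (a hand-rolled sketch, not the authors' EPGP construction): for the 1D heat equation u_t = u_xx, superposing decaying Fourier modes with Gaussian weights yields random functions whose every realization solves the PDE exactly. All frequencies and weights below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative spectral prior for the heat equation u_t = u_xx:
# each mode exp(-k^2 t) * {cos, sin}(k x) solves the PDE exactly,
# so any random superposition of them does too. Frequencies and
# weights are arbitrary choices for this sketch.
K = rng.uniform(0.5, 3.0, size=20)   # spectral frequencies
A = rng.normal(size=20)              # Gaussian weights for cosine modes
B = rng.normal(size=20)              # Gaussian weights for sine modes

def u(x, t):
    decay = np.exp(-K**2 * t)
    return np.sum(A * decay * np.cos(K * x) + B * decay * np.sin(K * x))

# Sanity check: u_t and u_xx agree up to finite-difference error.
h = 1e-4
x0, t0 = 0.7, 0.3
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print(abs(u_t - u_xx))
```

The EPGP construction generalizes this to arbitrary linear systems via the Ehrenpreis-Palamodov representation; this sketch covers only a single scalar equation.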
There is an urgent market demand to minimize the testing time of Prompt Gamma Neutron Activation Analysis (PGNAA) spectroscopy measurement machines, so that such a machine can serve as an instant material analyzer, for example to immediately sort waste samples and determine the best recycling method based on the detected composition of a test sample. This paper presents a new development in deep-learning classification aimed at reducing the testing time of PGNAA machines. We propose a random sampling method and class activation maps (CAM) to generate "downsized" samples and continuously train a CNN model. The random sampling method (RSM) reduces the measurement time of a sample, while class activation maps (CAM) filter out the less important energy ranges of the downsized samples. We reduce the total PGNAA measurement time to 2.5 seconds while maintaining an accuracy of about 96.88% on our dataset, which comprises 12 different substances. Compared with classifying dissimilar materials, classifying substances that share the same elements requires more testing time (sample count rate) to achieve good accuracy. For example, classifying copper alloys requires nearly 24 seconds of testing time to reach 98% accuracy.
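A minimal sketch of the random-sampling idea as we read it (the spectrum and all numbers are synthetic stand-ins, not the paper's data): a short measurement is emulated by drawing a small number of detection events from a long-measurement spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a long PGNAA measurement: counts per energy bin.
full_spectrum = rng.poisson(lam=50.0, size=1024)

# Emulate a short measurement ("downsized" sample) by resampling a small
# number of detection events from the empirical bin distribution.
p = full_spectrum / full_spectrum.sum()
short_events = 5000                        # event budget of the short window
short_spectrum = rng.multinomial(short_events, p)

print(short_spectrum.sum())  # 5000
```

CAM-based filtering of unimportant energy ranges would then mask bins of `short_spectrum` before feeding it to the CNN.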
For environmental, sustainable economic, and political reasons, recycling processes are becoming increasingly important, with the aim of a much higher use of secondary raw materials. Currently, no method exists for the non-destructive online analysis of heterogeneous materials in the copper and aluminium industries. Prompt Gamma Neutron Activation Analysis (PGNAA) has the potential to overcome this challenge. The difficulty in using PGNAA for real-time classification lies in the small amount of noisy data that short-term measurements yield. In this case, classical evaluation methods based on a detailed peak-by-peak analysis fail. We therefore propose treating the spectral data as probability distributions. Materials can then be classified by the maximum log-likelihood with respect to kernel density estimates, with hyperparameters optimized via discrete sampling. For measurements of pure aluminium alloys, we achieve a near-perfect classification of aluminium alloys in under 0.25 seconds.
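To make the proposed classifier concrete, here is a numpy-only sketch (the synthetic Gaussian "spectra", the fixed bandwidth, and the two alloy names are all our inventions): each material's training events define a kernel density estimate, and a short measurement is assigned to the material maximizing the summed log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_likelihood(events, train_events, bandwidth=0.1):
    # Gaussian KDE of the spectral distribution fitted on train_events,
    # evaluated as the summed log-density of the observed events.
    z = (events[:, None] - train_events[None, :]) / bandwidth
    dens = np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
    return np.log(dens + 1e-300).sum()

# Synthetic stand-in spectra: two "alloys" with peaks at different energies.
alloy_a = rng.normal(1.0, 0.05, size=2000)          # training events, material A
alloy_b = rng.normal(1.4, 0.05, size=2000)          # training events, material B
short_measurement = rng.normal(1.0, 0.05, size=50)  # noisy short measurement of A

scores = {name: log_likelihood(short_measurement, ev)
          for name, ev in [("A", alloy_a), ("B", alloy_b)]}
predicted = max(scores, key=scores.get)
print(predicted)  # "A"
```

In the paper the bandwidth-type hyperparameters are tuned by discrete sampling; here the bandwidth is simply fixed.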
Data in many applications follows systems of ordinary differential equations (ODEs). This paper presents a novel algorithmic and symbolic construction of covariance functions for Gaussian processes (GPs) whose realizations strictly follow a system of linear homogeneous ODEs with constant coefficients; we call these LODE-GPs. Introducing this strong inductive bias into a GP improves the modelling of such data. Using the Smith normal form algorithm, a symbolic technique, we overcome two current limitations of the state of the art: (1) the need for certain uniqueness conditions on the set of solutions, typically assumed in classical ODE solvers and their probabilistic counterparts, and (2) the restriction to controllable systems, typically assumed when encoding differential equations in covariance functions. We show the effectiveness of LODE-GPs in a number of experiments, for example learning physically interpretable parameters by maximizing the likelihood.
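A minimal single-equation sketch of the idea (ours, not the paper's multi-output construction): for u'' + u = 0, the kernel k(t, s) = cos(t - s) has rank two, so every GP sample path is a·cos(t) + b·sin(t) and hence an exact solution of the ODE.

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.linspace(0, 2 * np.pi, 200)

# Covariance of the prior for u'' + u = 0; it factors as
# cos(t)cos(s) + sin(t)sin(s), i.e. rank two, so sampling the GP
# reduces to drawing two Gaussian coefficients.
C = np.cos(t[:, None] - t[None, :])
a, b = rng.normal(size=2)
u = a * np.cos(t) + b * np.sin(t)   # one exact draw from this GP

# Finite-difference check that u'' + u is approximately 0 in the interior.
h = t[1] - t[0]
residual = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 + u[1:-1]
print(np.abs(residual).max())
```

For systems of ODEs, the paper's Smith normal form step decouples the operator matrix into such scalar problems; that machinery is beyond this sketch.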
Computer algebra can answer various questions about partial differential equations using symbolic algorithms. However, the inclusion of data into equations is rare in computer algebra. Recently, therefore, computer algebra models have been combined with Gaussian processes, a regression model in machine learning, to describe the behavior of certain differential equations under data. While polynomial boundary conditions could already be described in this context, we extend these models to analytic boundary conditions. Furthermore, we describe the necessary algorithms for Gröbner and Janet bases of Weyl algebras with certain analytic coefficients. Using these algorithms, we provide examples of divergence-free flows in domains bounded by analytic functions and adapted to observations.
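As a tiny worked example of the divergence-free flows mentioned at the end (our stand-in, using a simple analytic stream function rather than the paper's Gröbner-basis machinery): in 2D, any stream function ψ yields a field v = (ψ_y, -ψ_x) with div v = 0 identically.

```python
import numpy as np

# Stream-function construction: v = (psi_y, -psi_x) for
# psi(x, y) = sin(x) * sin(y), so div v = psi_yx - psi_xy = 0.
def v(x, y):
    return np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)

# Finite-difference check of the divergence at an arbitrary point.
h = 1e-5
x0, y0 = 0.4, 1.1
div = ((v(x0 + h, y0)[0] - v(x0 - h, y0)[0]) / (2 * h)
       + (v(x0, y0 + h)[1] - v(x0, y0 - h)[1]) / (2 * h))
print(abs(div))
```

The paper's contribution is to produce such fields symbolically in domains with analytic boundaries and to condition them on observations; this sketch only verifies the underlying identity.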
In recent years, several metrics have been developed for evaluating group fairness of rankings. Given that these metrics were developed with different application contexts and ranking algorithms in mind, it is not straightforward which metric to choose for a given scenario. In this paper, we perform a comprehensive comparative analysis of existing group fairness metrics developed in the context of fair ranking. By virtue of their diverse application contexts, we argue that such a comparative analysis is not straightforward. Hence, we take an axiomatic approach whereby we design a set of thirteen properties for group fairness metrics that consider different ranking settings. A metric can then be selected depending on whether it satisfies all or a subset of these properties. We apply these properties to eleven existing group fairness metrics, and through both empirical and theoretical results we demonstrate that most of these metrics only satisfy a small subset of the proposed properties. These findings highlight limitations of existing metrics, and provide insights into how to evaluate and interpret different fairness metrics in practical deployment. The proposed properties can also assist practitioners in selecting appropriate metrics for evaluating fairness in a specific application.
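To fix ideas, here is one concrete member of the metric families being compared, in a hedged numpy sketch (the discount, the ratio form, and the group labels are our illustrative choices, not any specific metric from the paper): exposure at rank r is discounted by 1/log2(r+1), and fairness is the ratio of mean exposure between the two groups.

```python
import numpy as np

def group_exposure_ratio(groups):
    # Mean-exposure ratio between groups "A" and "B" in a ranking,
    # using the common logarithmic position discount. 1.0 means parity.
    ranks = np.arange(1, len(groups) + 1)
    exposure = 1.0 / np.log2(ranks + 1)
    g = np.asarray(groups)
    exp_a = exposure[g == "A"].mean()
    exp_b = exposure[g == "B"].mean()
    return min(exp_a, exp_b) / max(exp_a, exp_b)

r_interleaved = group_exposure_ratio(["A", "B", "A", "B"])  # groups alternate
r_blocked = group_exposure_ratio(["A", "A", "B", "B"])      # B pushed down
print(r_interleaved, r_blocked)  # the blocked ranking scores lower
```

The paper's axiomatic properties would then interrogate such a metric, e.g. whether swapping two same-group items leaves its value unchanged.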
Classically, the development of humanoid robots has been sequential and iterative. Such bottom-up design procedures rely heavily on intuition and are often biased by the designer's experience. Exploiting the non-linear coupled design space of robots is non-trivial and requires a systematic procedure for exploration. We adopt the top-down design strategy, the V-model, used in automotive and aerospace industries. Our co-design approach identifies non-intuitive designs from within the design space and obtains the maximum permissible range of the design variables as a solution space, to physically realise the obtained design. We show that by constructing the solution space, one can (1) decompose higher-level requirements onto sub-system-level requirements with tolerance, alleviating the "chicken-or-egg" problem during the design process, (2) decouple the robot's morphology from its controller, enabling greater design flexibility, (3) obtain independent sub-system level requirements, reducing the development time by parallelising the development process.
Recent diffusion-based AI art platforms are able to create impressive images from simple text descriptions. This makes them powerful tools for concept design in any discipline that requires creativity in visual design tasks. This is also true for early stages of architectural design with multiple stages of ideation, sketching and modelling. In this paper, we investigate how applicable diffusion-based models already are to these tasks. We research the applicability of the platforms Midjourney, DALL-E 2 and StableDiffusion to a series of common use cases in architectural design to determine which are already solvable or might soon be. We also analyze how they are already being used by examining a data set of 40 million Midjourney queries with NLP methods to extract common usage patterns. With these insights, we derive a workflow for interior and exterior design that combines the strengths of the individual platforms.
With the rise of AI and automation, moral decisions are being put into the hands of algorithms that were formerly the preserve of humans. In autonomous driving, a variety of such decisions with ethical implications are made by algorithms for behavior and trajectory planning. Therefore, we present an ethical trajectory planning algorithm with a framework that aims at a fair distribution of risk among road users. Our implementation incorporates a combination of five essential ethical principles: minimization of the overall risk, priority for the worst-off, equal treatment of people, responsibility, and maximum acceptable risk. To the best of the authors' knowledge, this is the first ethical algorithm for trajectory planning of autonomous vehicles in line with the 20 recommendations from the EU Commission expert group and with general applicability to various traffic situations. We showcase the ethical behavior of our algorithm in selected scenarios and provide an empirical analysis of the ethical principles in 2000 scenarios. The code used in this research is available as open-source software.
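A toy sketch of how such principles might be combined into one trajectory cost (all weights, risk numbers, and the specific functional form are invented for illustration; the paper's formulation differs in detail, and the responsibility principle is omitted here):

```python
import numpy as np

def trajectory_cost(risks, w_total=1.0, w_worst=1.0, w_equal=0.5, r_max=0.1):
    # risks: per-road-user harm probabilities for one candidate trajectory.
    risks = np.asarray(risks, dtype=float)
    if risks.max() > r_max:        # maximum acceptable risk: hard veto
        return float("inf")
    total = risks.sum()            # minimize the overall risk
    worst = risks.max()            # priority for the worst-off (maximin)
    spread = risks.std()           # equal treatment of people
    return w_total * total + w_worst * worst + w_equal * spread

safe = trajectory_cost([0.01, 0.02, 0.01])
risky = trajectory_cost([0.01, 0.09, 0.01])
vetoed = trajectory_cost([0.01, 0.20, 0.01])
print(safe < risky, vetoed)  # True inf
```

A planner would evaluate this cost over sampled candidate trajectories and pick the minimizer, discarding any that trip the hard veto.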
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.